Knowledge graph (KG) link prediction is a fundamental task in artificial intelligence, with applications in natural language processing, information retrieval, and biomedicine. Recently, promising results have been achieved by leveraging cross-modal information in KGs, using ensembles that combine knowledge graph embeddings (KGEs) and contextual language models (LMs). However, existing ensembles are either (1) not consistently effective in terms of ranking accuracy gains or (2) impractically inefficient on larger datasets, due to the combinatorial explosion of pairwise ranking with deep language models. In this paper, we propose CascadER, a novel tiered ranking architecture that retains the ranking accuracy of full ensembling while greatly improving efficiency. CascadER uses LMs to rerank the outputs of more efficient base KGEs, relying on an adaptive subset selection scheme designed to invoke the LMs minimally while maximizing the accuracy gain over the KGEs. Extensive experiments demonstrate that CascadER improves over KGE baselines by up to 9 points, setting new state-of-the-art performance on four benchmarks, while improving efficiency by one or more orders of magnitude over competitive cross-modal baselines. Our empirical analyses show that diversity of models across modalities and preservation of individual models' confidence signals help explain the effectiveness of CascadER, and suggest promising directions for cross-modal cascaded architectures. Code and pretrained models are available at https://github.com/tsafavi/cascader.
translated by Google Translate
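The tiered reranking idea can be illustrated with a minimal sketch (our own illustration, not the authors' implementation): a cheap base ranker scores every candidate, and the expensive reranker is invoked only on a small top-k subset, leaving the ordering of the tail untouched.

```python
def cascade_rank(candidates, cheap_score, expensive_score, k):
    """Rank all candidates with cheap_score, then rerank only the
    top-k of them with expensive_score."""
    ranked = sorted(candidates, key=cheap_score, reverse=True)
    head, tail = ranked[:k], ranked[k:]
    head = sorted(head, key=expensive_score, reverse=True)
    return head + tail

# Toy per-entity scores standing in for a KGE model and an LM reranker.
kge = {"e1": 0.9, "e2": 0.8, "e3": 0.3, "e4": 0.1}
lm = {"e1": 0.2, "e2": 0.7, "e3": 0.9, "e4": 0.4}

# Only 2 of the 4 candidates ever reach the expensive scorer.
print(cascade_rank(kge, kge.get, lm.get, k=2))  # ['e2', 'e1', 'e3', 'e4']
```

Here the adaptive part of the paper's subset selection is reduced to a fixed k; in CascadER the subset size is chosen adaptively to balance LM invocations against the expected accuracy gain.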
Semantic segmentation is a computer vision task that assigns each pixel of an image to a class, and it must be performed with both accuracy and efficiency. Most existing deep FCNs require heavy computation and are power-hungry, making them unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts on aerial images captured under adverse conditions. Furthermore, we train several models on the FloodNet dataset, which contains UAV images captured after Hurricane Harvey, and benchmark their performance on special classes such as flooded vs. non-flooded buildings and flooded vs. non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on a Jetson AGX Xavier module.
We explore unifying a neural segmenter with two-pass cascaded encoder ASR into a single model. A key challenge is allowing the segmenter (which runs in real-time, synchronously with the decoder) to finalize the 2nd pass (which runs 900 ms behind real-time) without introducing user-perceived latency or deletion errors during inference. We propose a design where the neural segmenter is integrated with the causal 1st pass decoder to emit an end-of-segment (EOS) signal in real-time. The EOS signal is then used to finalize the non-causal 2nd pass. We experiment with different ways to finalize the 2nd pass, and find that a novel dummy frame injection strategy allows for simultaneously achieving high-quality 2nd pass results and low finalization latency. On a real-world long-form captioning task (YouTube), we achieve 2.4% relative WER and 140 ms EOS latency gains over a baseline VAD-based segmenter with the same cascaded encoder.
Multi-object tracking is a cornerstone capability of any robotic system. Most approaches follow a tracking-by-detection paradigm. However, within this framework, detectors function in a low precision-high recall regime, ensuring a low number of false-negatives while producing a high rate of false-positives. This can negatively affect the tracking component by making data association and track lifecycle management more challenging. Additionally, false-negative detections due to difficult scenarios like occlusions can negatively affect tracking performance. Thus, we propose a method that learns shape and spatio-temporal affinities between consecutive frames to better distinguish between true-positive and false-positive detections and tracks, while compensating for false-negative detections. Our method provides a probabilistic matching of detections that leads to robust data association and track lifecycle management. We quantitatively evaluate our method through ablative experiments and on the nuScenes tracking benchmark where we achieve state-of-the-art results. Our method not only estimates accurate, high-quality tracks but also decreases the overall number of false-positive and false-negative tracks. Please see our project website for source code and demo videos: sites.google.com/view/shasta-3d-mot/home.
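One hedged illustration of probabilistic matching for data association (our simplification, not the paper's method): affinities between tracks and detections can be turned into assignments greedily, with a threshold that leaves low-affinity, likely false-positive detections unmatched.

```python
def match(affinity, threshold=0.5):
    """affinity[t][d]: probability that track t and detection d match.
    Greedily assign pairs in order of descending affinity; stop once
    the remaining affinities fall below the threshold."""
    pairs = sorted(
        ((p, t, d) for t, row in enumerate(affinity) for d, p in enumerate(row)),
        reverse=True,
    )
    used_t, used_d, out = set(), set(), []
    for p, t, d in pairs:
        if p < threshold:
            break
        if t not in used_t and d not in used_d:
            out.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return out

aff = [[0.9, 0.2], [0.1, 0.4]]
print(match(aff))  # [(0, 0)] -- detection 1 stays unmatched (below threshold)
```

A matching-based tracker would then spawn no track for the unmatched detection and mark the unmatched track as occluded rather than terminating it, which is the kind of lifecycle decision the learned affinities are meant to make robust.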
Self-supervised pre-training of a speech foundation model, followed by supervised fine-tuning, has shown impressive quality improvements on automatic speech recognition (ASR) tasks. Fine-tuning a separate foundation model for each of many downstream tasks is expensive, since foundation models are usually very big. Parameter-efficient fine-tuning methods (e.g. adapters, sparse update methods) offer an alternative paradigm, where a small set of parameters is updated to adapt the foundation model to new tasks. However, these methods still suffer from high computational memory cost and slow training speed, because they require backpropagation through the entire neural network at each step. In this paper, we analyze the performance of features at different layers of a foundation model on the speech recognition task, and propose a novel hierarchical feature fusion method for resource-efficient transfer learning from speech foundation models. Experimental results show that the proposed method achieves better performance on the speech recognition task than existing algorithms, with fewer trainable parameters, lower computational memory cost, and faster training speed. After combining with adapters at all layers, the proposed method can achieve the same performance as fine-tuning the whole model with $97\%$ fewer trainable encoder parameters and $53\%$ faster training speed.
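A common form of layer-wise feature fusion (a minimal sketch under our own assumptions, not the paper's exact architecture) combines frozen encoder layer outputs with learned softmax weights, so only the tiny weight vector and a downstream head need gradients, avoiding backpropagation through the encoder.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def fuse(layer_feats, logits):
    """layer_feats: one feature vector per frozen encoder layer;
    logits: one unnormalized fusion weight per layer."""
    w = softmax(logits)
    dim = len(layer_feats[0])
    return [sum(wi * f[d] for wi, f in zip(w, layer_feats)) for d in range(dim)]

# Three toy "layer outputs" for one frame; uniform logits average them.
feats = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
print(fuse(feats, [0.0, 0.0, 0.0]))  # averages to ~[1.0, 1.0]
```

The paper's hierarchical variant fuses features at multiple depths rather than with a single flat weighting, but the resource argument is the same: the encoder stays frozen and only the fusion parameters train.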
Language identification is critical for many downstream tasks in automatic speech recognition (ASR), and it is beneficial to integrate it into multilingual end-to-end ASR as an additional task. In this paper, we propose modifying the structure of the cascaded-encoder-based recurrent neural network transducer (RNN-T) model by integrating a per-frame language identifier (LID) predictor. RNN-T with cascaded encoders can achieve streaming ASR with low latency using first-pass decoding with no right context, and achieve lower word error rates (WERs) using second-pass decoding with longer right context. By leveraging this difference in right context, together with a streaming implementation of statistics pooling, the proposed method can achieve accurate streaming LID prediction with little extra test-time cost. Experimental results on a voice search dataset with 9 language locales show that the proposed method achieves an average LID prediction accuracy of 96.2%, and the same second-pass WER as when an oracle LID is provided in the input.
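A minimal sketch of streaming statistics pooling (our assumption about the mechanism, not the paper's code): a running mean and variance over encoder frames, so pooled statistics, and hence a LID prediction, are available at every frame without waiting for the utterance to end.

```python
class StreamingStatsPool:
    def __init__(self, dim):
        self.n = 0
        self.s1 = [0.0] * dim  # running sum per feature
        self.s2 = [0.0] * dim  # running sum of squares per feature

    def update(self, frame):
        """Absorb one frame; return mean followed by variance
        over all frames seen so far."""
        self.n += 1
        for d, x in enumerate(frame):
            self.s1[d] += x
            self.s2[d] += x * x
        mean = [s / self.n for s in self.s1]
        var = [s2 / self.n - m * m for s2, m in zip(self.s2, mean)]
        return mean + var

pool = StreamingStatsPool(2)
pool.update([1.0, 2.0])
print(pool.update([3.0, 4.0]))  # [2.0, 3.0, 1.0, 1.0]
```

Each `update` is O(dim), which is why per-frame LID prediction adds almost no test-time cost: the pooled vector is refreshed incrementally rather than recomputed over the whole prefix.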
On-device end-to-end (E2E) models have shown improvements over conventional models on English voice search tasks in terms of both quality and latency. E2E models have also shown promising results for multilingual automatic speech recognition (ASR). In this paper, we extend our previous capacity solution to streaming applications and present a streaming multilingual E2E ASR system that runs fully on device, with quality and latency comparable to individual monolingual models. To achieve this, we propose an encoder endpointer model and an end-of-utterance (EOU) joint layer for a better quality-latency trade-off. Our system is built in a language-agnostic manner, allowing it to support code-switching in real time. To address the feasibility concerns of running a large model, we conducted on-device profiling and replaced the time-consuming LSTM decoder with the recently developed Embedding decoder. With these changes, we managed to run such a system on a mobile device in less than real time.
In voice-enabled applications, a predetermined hotword is usually used to activate a device so that it attends to the query. To avoid repeating the hotword, we propose an end-to-end (E2E) streaming intended-query detector that identifies utterances directed at the device and filters out other utterances not directed at the device. The proposed approach incorporates the intended-query detector into an E2E model that folds the different components of speech recognition into one neural network. E2E modeling of speech decoding and intended-query detection also allows us to declare detection results early, based on partial decoding results, which is important for reducing latency and making the system responsive. Compared with an independent intended-query detector, the proposed E2E approach yields a 22% relative improvement in equal error rate (EER) for detection accuracy and a 600 ms latency improvement. In our experiments, the proposed model detects whether the user is talking to the device with an 8.7% EER shortly after the user starts speaking.
While streaming voice assistant systems have been used in many applications, they typically focus on unnatural, one-shot interactions, assuming input from a single voice query without hesitations or disfluencies. However, common conversational utterances often involve multiple queries with turn-taking, in addition to disfluencies. Such disfluencies include pausing to think, hesitations, word lengthening, filled pauses, and repeated phrases. This makes speech recognition on conversational speech, including utterances with multiple queries, a challenging task. To better model conversational interactions, it is critical to discriminate between disfluencies and the end of a query: the user should be able to hold the floor during a disfluency, while the system should respond as quickly as possible once the user has finished speaking. In this paper, we present a turn-taking predictor built on top of an end-to-end (E2E) speech recognizer. Our best system is obtained by jointly optimizing the ASR task and detecting when the user has paused to think or has finished speaking. The proposed approach achieves over 97% recall and 85% precision in predicting true turn-taking, with only 100 ms latency, on a test set designed with 4 types of disfluencies inserted into conversational utterances.
Careful design of audio representations has become a dominant feature of approaches to many speech tasks. Such approaches increasingly emphasize "disentanglement", in which a representation contains only the parts of the speech signal relevant to transcription, while discarding irrelevant information. In this paper, we construct a representation learning task based on joint modeling of ASR and TTS, and seek to learn a representation of audio that disentangles the part of the speech signal relevant to transcription from the part that is not. We provide empirical evidence that successfully finding such a representation is tied to the randomness inherent in training. We then observe that these desired, disentangled solutions to the optimization problem have unique statistical properties. Finally, we show that enforcing these properties during training improves our joint modeling task by an average of 24.5% relative. These observations motivate a novel approach to learning effective audio representations.
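One generic way to enforce a disentanglement-style statistical property (our hedged illustration; the abstract does not specify the paper's actual objective) is to penalize cross-correlation between the "transcription-relevant" half and the "residual" half of a batch of representations, pushing the two halves toward carrying independent information.

```python
import random

def cross_corr_penalty(batch):
    """batch: list of representation vectors. Returns the sum of squared
    cross-correlations between the first and second halves of the
    feature dimensions; 0 when the halves are uncorrelated."""
    n, dim = len(batch), len(batch[0])
    half = dim // 2
    cols = list(zip(*batch))  # one tuple per feature dimension

    def standardize(col):
        m = sum(col) / n
        sd = (sum((x - m) ** 2 for x in col) / n) ** 0.5 or 1.0
        return [(x - m) / sd for x in col]

    a = [standardize(c) for c in cols[:half]]
    b = [standardize(c) for c in cols[half:]]
    return sum(
        (sum(x * y for x, y in zip(ca, cb)) / n) ** 2 for ca in a for cb in b
    )

rng = random.Random(0)
independent = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(2000)]
copied = [row[:2] + row[:2] for row in independent]  # halves identical

# The penalty is near zero for independent halves, large for copied ones.
print(cross_corr_penalty(independent) < 0.1 < cross_corr_penalty(copied))  # True
```

In training, a term like this would be added to the joint ASR+TTS loss; the `independent`/`copied` split and the half-and-half layout are hypothetical choices made for the illustration.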